
    Efficient Localization of Discontinuities in Complex Computational Simulations

    Surrogate models for computational simulations are input-output approximations that allow computationally intensive analyses, such as uncertainty propagation and inference, to be performed efficiently. When a simulation output does not depend smoothly on its inputs, the error and convergence rate of many approximation methods deteriorate substantially. This paper details a method for efficiently localizing discontinuities in the input parameter domain, so that the model output can be approximated as a piecewise smooth function. The approach comprises an initialization phase, which uses polynomial annihilation to assign function values to different regions and thus seed an automated labeling procedure, followed by a refinement phase that adaptively updates a kernel support vector machine representation of the separating surface via active learning. The overall approach avoids structured grids and exploits any available simplicity in the geometry of the separating surface, thus reducing the number of model evaluations required to localize the discontinuity. The method is illustrated on examples of up to eleven dimensions, including algebraic models and ODE/PDE systems, and demonstrates improved scaling and efficiency over other discontinuity localization approaches.
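    The sketch below illustrates the general pattern the abstract describes (a kernel SVM for the separating surface, refined by active learning near the decision boundary), not the paper's implementation: the test function, the thresholding used to seed labels in place of polynomial annihilation, and all parameter values are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC

def model(x):
    # Hypothetical piecewise-smooth output with a curved discontinuity at x2 = sin(pi * x1).
    smooth = 0.1 * np.cos(x[:, 0])
    return smooth + np.where(x[:, 1] > np.sin(np.pi * x[:, 0]), 2.0, -2.0)

def label(values):
    # Stand-in for the polynomial-annihilation seeding: assign each sample to a region
    # by thresholding the clearly bimodal output values.
    return np.where(values > 0.0, 1, -1)

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(40, 2))   # initial space-filling samples
y = label(model(X))

svm = SVC(kernel="rbf", C=100.0, gamma="scale")
for _ in range(10):                        # active-learning refinement loop
    svm.fit(X, y)
    candidates = rng.uniform(-1.0, 1.0, size=(500, 2))
    # Evaluate the model only at the candidates closest to the current separating surface,
    # so new samples concentrate near the estimated discontinuity.
    idx = np.argsort(np.abs(svm.decision_function(candidates)))[:5]
    X = np.vstack([X, candidates[idx]])
    y = np.concatenate([y, label(model(candidates[idx]))])
```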

    A continuous analogue of the tensor-train decomposition

    We develop new approximation algorithms and data structures for representing and computing with multivariate functions using the functional tensor-train (FT), a continuous extension of the tensor-train (TT) decomposition. The FT represents functions using a tensor-train ansatz by replacing the three-dimensional TT cores with univariate matrix-valued functions. The main contribution of this paper is a framework to compute the FT that employs adaptive approximations of univariate fibers, and that is not tied to any tensorized discretization. The algorithm can be coupled with any univariate linear or nonlinear approximation procedure. We demonstrate that this approach can generate multivariate function approximations that are several orders of magnitude more accurate, for the same cost, than those based on the conventional approach of compressing the coefficient tensor of a tensor-product basis. Our approach is in the spirit of other continuous computation packages such as Chebfun, and yields an algorithm which requires the computation of "continuous" matrix factorizations such as the LU and QR decompositions of vector-valued functions. To support these developments, we describe continuous versions of an approximate maximum-volume cross approximation algorithm and of a rounding algorithm that re-approximates an FT by one of lower ranks. We demonstrate that our technique improves accuracy and robustness, compared to TT and quantics-TT approaches with fixed parameterizations, of high-dimensional integration, differentiation, and approximation of functions with local features such as discontinuities and other nonlinearities.
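    To make the "univariate matrix-valued cores" concrete, here is a minimal sketch of how a functional tensor-train evaluates a point: each core is a function F_k(x_k) returning an r_{k-1} x r_k matrix (with r_0 = r_d = 1), and f(x) is the chained product F_1(x_1) F_2(x_2) ... F_d(x_d). The polynomial parameterization, ranks, and random coefficients below are assumptions for illustration, not the adaptive fiber approximations computed in the paper.

```python
import numpy as np

def make_core(rank_in, rank_out, degree=3, seed=0):
    # A hypothetical core: every matrix entry is a univariate polynomial of the given degree.
    rng = np.random.default_rng(seed)
    coeffs = rng.normal(size=(degree + 1, rank_in, rank_out))
    return lambda x: np.polynomial.polynomial.polyval(x, coeffs)

ranks = [1, 3, 3, 1]                                             # assumed FT ranks for d = 3
cores = [make_core(ranks[k], ranks[k + 1], seed=k) for k in range(3)]

def ft_eval(point):
    # Chain the matrix-valued core evaluations left to right.
    result = np.eye(1)
    for core, xk in zip(cores, point):
        result = result @ core(xk)
    return result.item()

print(ft_eval([0.2, -0.5, 0.9]))
```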

    Bayesian Identification of Nonseparable Hamiltonian Systems Using Stochastic Dynamic Models

    This paper proposes a probabilistic Bayesian formulation for system identification (ID) and estimation of nonseparable Hamiltonian systems using stochastic dynamic models. Nonseparable Hamiltonian systems arise in models from diverse science and engineering applications such as astrophysics, robotics, vortex dynamics, charged particle dynamics, and quantum mechanics. The numerical experiments demonstrate that the proposed method recovers dynamical systems with higher accuracy and reduced predictive uncertainty compared to state-of-the-art approaches. The results further show that accurate predictions far outside the training time interval in the presence of sparse and noisy measurements are possible, which lends robustness and generalizability to the proposed approach. A quantitative benefit is prediction accuracy with less than 10% relative error for more than 12 times longer than a comparable least-squares-based method on a benchmark problem.
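    As background for "nonseparable": the Hamiltonian cannot be split as H(q, p) = T(p) + V(q), so the equations of motion q' = dH/dp and p' = -dH/dq couple position and momentum through mixed terms. The sketch below integrates one such system with a standard ODE solver; the specific Hamiltonian is an assumed toy example, not one of the paper's benchmark systems, and the paper's Bayesian identification machinery is not shown.

```python
import numpy as np
from scipy.integrate import solve_ivp

def hamiltonian(q, p):
    # The mixed term 0.5 * (q * p)**2 makes this Hamiltonian nonseparable.
    return 0.5 * p**2 + 0.5 * q**2 + 0.5 * (q * p)**2

def rhs(t, state):
    # Hamilton's equations: q' = dH/dp, p' = -dH/dq.
    q, p = state
    dH_dq = q + q * p**2
    dH_dp = p + p * q**2
    return [dH_dp, -dH_dq]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], rtol=1e-9, atol=1e-9)
# The energy should be (nearly) conserved along the computed trajectory.
print(hamiltonian(sol.y[0, 0], sol.y[1, 0]), hamiltonian(sol.y[0, -1], sol.y[1, -1]))
```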

    An Incremental Tensor Train Decomposition Algorithm

    We present a new algorithm for incrementally updating the tensor-train decomposition of a stream of tensor data. This new algorithm, called the tensor-train incremental core expansion (TT-ICE), improves upon the current state-of-the-art algorithms for compressing data in tensor-train format by developing a new adaptive approach that incurs significantly slower rank growth and guarantees compression accuracy. This capability is achieved by limiting the number of new vectors appended to the TT-cores of an existing accumulation tensor after each data increment. These vectors represent directions orthogonal to the span of the existing cores and are limited to those needed to represent a newly arrived tensor to a target accuracy. We provide two versions of the algorithm: TT-ICE and TT-ICE accelerated with heuristics (TT-ICE*). We provide a proof of correctness for TT-ICE and empirically demonstrate the performance of the algorithms in compressing large-scale video and scientific simulation datasets. Compared to existing approaches that also use rank adaptation, TT-ICE* achieves 57× higher compression and up to 95% reduction in computational time.
    Comment: 22 pages, 7 figures; for the Python code of the TT-ICE and TT-ICE* algorithms, see https://github.com/dorukaks/TT-IC
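    The rank-growth control described above can be illustrated in miniature for a single orthonormal basis U (standing in for one TT-core's column span): project a newly arrived unfolding onto the complement of span(U) and append only the directions needed to meet a target accuracy. The function name, tolerance rule, and data shapes below are assumptions for illustration; this is a simplified sketch, not the TT-ICE implementation (available at the repository linked above).

```python
import numpy as np

def expand_basis(U, new_unfolding, eps=1e-2):
    # Residual of the new data after projecting onto the existing basis.
    residual = new_unfolding - U @ (U.T @ new_unfolding)
    Q, S, _ = np.linalg.svd(residual, full_matrices=False)
    # Keep only the orthogonal directions whose singular values exceed the tolerance
    # (relative to the size of the new data), which limits how fast the rank grows.
    keep = S > eps * np.linalg.norm(new_unfolding)
    return np.hstack([U, Q[:, keep]]) if keep.any() else U

rng = np.random.default_rng(0)
U = np.linalg.qr(rng.normal(size=(100, 5)))[0]   # existing orthonormal core basis (rank 5)
new_data = rng.normal(size=(100, 20))            # unfolding of a newly arrived increment
U_updated = expand_basis(U, new_data)
print(U.shape, "->", U_updated.shape)            # rank grows only as much as needed
```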